AI Development


The ecosystem of machine learning competitions: Platforms, participants, and their impact on AI development

Nasios, Ioannis

arXiv.org Machine Learning

Machine learning competitions (MLCs) play a pivotal role in advancing artificial intelligence (AI) by fostering innovation, skill development, and practical problem-solving. This study provides a comprehensive analysis of major competition platforms such as Kaggle and Zindi, examining their workflows, evaluation methodologies, and reward structures. It further assesses competition quality, participant expertise, and global reach, with particular attention to demographic trends among top-performing competitors. By exploring the motivations of competition hosts, the paper underscores the significant role of MLCs in shaping AI development, promoting collaboration, and driving impactful technological progress. Combining literature synthesis with platform-level data analysis and practitioner insights, it offers a comprehensive understanding of the MLC ecosystem. The paper also demonstrates that MLCs operate at the intersection of academic research and industrial application, fostering the exchange of knowledge, data, and practical methodologies across domains. Their strong ties to open-source communities further promote collaboration, reproducibility, and continuous innovation within the broader ML ecosystem. By shaping research priorities, informing industry standards, and enabling large-scale crowdsourced problem-solving, these competitions play a key role in the ongoing evolution of AI. The study provides insights relevant to researchers, practitioners, and competition organizers, and examines the future trajectory and sustained influence of MLCs on AI development.


RWDS Big Questions: how do we balance innovation and regulation in the world of AI?

AIHub

AI development is accelerating, while regulation moves more deliberately. That tension creates a core challenge: how do we maintain momentum without breaking the things that matter? The aim isn't to slow innovation unnecessarily, but to ensure progress happens at a pace that protects individuals and society. Responsible actors should not be disadvantaged -- yet safeguards are essential to maintain trust. For the latest video in our RWDS Big Questions series, our panel explores this delicate balance.


6 Graphs That Show Where the U.S. Leads China on AI--and Where It Doesn't

TIME - Tech

Two important things happened on January 20, 2025. In Washington, D.C., Donald Trump was inaugurated as President of the United States. In Hangzhou, China, a little-known Chinese firm called DeepSeek released R1, an AI model that industry watchers called a "Sputnik moment" for the country's AI industry. "Whether we like it or not, we're suddenly engaged in a fast-paced competition to build and define this groundbreaking technology that will determine so much about the future of civilization," said Trump later that year, as he announced his administration's AI action plan, which was titled "Winning the Race." There are many interpretations of what AI companies and their governments are racing towards, says AI policy researcher Lennart Heim: to deploy AI systems in the economy, to build robots, to create human-like artificial general intelligence.


Japan and ASEAN agree to cooperate on AI development

The Japan Times

Japanese internal affairs minister Yoshimasa Hayashi (center) poses for a photo with ministers from ASEAN member states in Hanoi on Thursday. HANOI - Japan and the Association of Southeast Asian Nations have agreed to work together on developing new artificial intelligence models and preparing related laws. The AI-sector cooperation was included in a joint statement adopted at a meeting of digital ministers from Japan and ASEAN member states in Hanoi on Thursday. The statement was proposed by Japanese communications minister Yoshimasa Hayashi, who attended the meeting. Japan and ASEAN aim to join hands at a time when the United States and China are boosting their presence in the AI sector.


Japanese government adopts first basic plan on AI

The Japan Times

The government at a Cabinet meeting Tuesday adopted its first basic plan on the development and utilization of artificial intelligence. The basic plan stipulates that Japan will create reliable AI while balancing technological innovation and risk management, with an aim to become a country that offers the best environment for AI development and utilization. Japan lags behind not only other advanced nations but also countries with smaller economies in terms of AI development, and the gap is becoming wider year by year, it warns.


Five AI Developments That Changed Everything This Year

TIME - Tech

President Donald Trump speaks in the Roosevelt Room flanked by Masayoshi Son, Larry Ellison, and Sam Altman at the White House on January 21, 2025. In case you missed it, 2025 was a big year for AI. It became an economic force, propping up the stock market, and a geopolitical pawn, redrawing the frontlines of Great Power competition. It had both global and deeply personal effects, changing the ways that we think, write, and relate.



The Gender Code: Gendering the Global Governance of Artificial Intelligence

Cupac, Jelena

arXiv.org Artificial Intelligence

This paper examines how international AI governance frameworks address gender issues and gender-based harms. The analysis covers binding regulations, such as the EU AI Act; soft law instruments, like the UNESCO Recommendations on AI Ethics; and global initiatives, such as the Global Partnership on AI (GPAI). These instruments reveal emerging trends, including the integration of gender concerns into broader human rights frameworks, a shift toward explicit gender-related provisions, and a growing emphasis on inclusivity and diversity. Yet critical gaps persist, including inconsistent treatment of gender across governance documents, limited engagement with intersectionality, and a lack of robust enforcement mechanisms. In response, this paper argues that effective AI governance must be intersectional, enforceable, and inclusive: this is key to moving beyond tokenism toward meaningful equity and to preventing the reinforcement of existing inequalities. The study contributes to debates on ethical AI by highlighting the importance of gender-sensitive governance in building a just technological future.


The Strange Disappearance of an Anti-AI Activist

The Atlantic - Technology

Sam Kirchner wants to save the world from artificial superintelligence. He's been missing for two weeks. Before Sam Kirchner vanished, before the San Francisco Police Department began to warn that he could be armed and dangerous, before OpenAI locked down its offices over the potential threat, those who encountered him saw him as an ordinary, if ardent, activist. Phoebe Thomas Sorgen met Kirchner a few months ago at Travis Air Force Base, northeast of San Francisco, at a protest against immigration policy and U.S. military aid to Israel. Sorgen, a longtime activist whose first protests were against the Vietnam War, was going to block an entrance to the base with six other older women. Kirchner, 27 years old, was there with a couple of other members of a new group called Stop AI, and they all agreed to go along to record video on their phones in case of a confrontation with the police.


The Hidden AI Race: Tracking Environmental Costs of Innovation

Agarwal, Shyam, Chakraborti, Mahasweta

arXiv.org Artificial Intelligence

The past decade has seen a massive rise in the popularity of AI systems, mainly owing to developments in generative AI, which have revolutionized numerous industries and applications. However, this progress comes at considerable cost to the environment: training and deploying these models consumes significant computational resources and energy and leaves a large carbon footprint. In this paper, we study the amount of carbon dioxide released by models across different domains over varying time periods. By examining parameters such as model size, repository activity (e.g., commits and repository age), task type, and organizational affiliation, we identify key factors influencing the environmental impact of AI development. Our findings reveal that model size and versioning frequency are strongly correlated with higher emissions, while domain-specific trends show that NLP models tend to have lower carbon footprints than audio-based systems. Organizational context also plays a significant role: university-driven projects exhibit the highest emissions, followed by non-profits and companies, while community-driven projects show a reduction in emissions. These results highlight the critical need for green AI practices, including the adoption of energy-efficient architectures, optimized development workflows, and renewable energy sources. We also discuss practices that can lead to a more sustainable future with AI, and we close with future research directions motivated by our work. This work not only provides actionable insights to mitigate the environmental impact of AI but also poses new research questions for the community to explore. By emphasizing the interplay between sustainability and innovation, our study aims to guide future efforts toward building a more ecologically responsible AI ecosystem.